
    Strategic Proportionality: Limitations on the Use of Force in Modern Armed Conflicts

    The nature of modern armed conflicts, combined with traditional interpretations of proportionality, poses serious challenges to the jus ad bellum goal of limiting and controlling wars. Between the jus ad bellum focus on decisions to use force and the international humanitarian law (IHL) regulation of specific attacks lies a far-reaching space in which the regulatory role of international law lacks much-needed clarity. Perhaps the most striking example concerns the overall casualties of war. If the jus ad bellum is understood as applying only to the opening moments of the conflict, it cannot provide a solution to the growing number of casualties later in the conflict. Moreover, if it does not apply to non-international armed conflicts, it is of little use in alleviating the suffering of war in a vast proportion of the conflicts of the past half-century and more. IHL is equally unsuited to dealing with overall casualties: each individual attack may be proportionate, yet the cumulative number of civilians killed may slowly rise to intolerable levels. A similar problem arises in assessing other forms of accumulated destruction. This article sets out a new approach to proportionality in armed conflict and the regulation of war. It advocates for a principle of “strategic proportionality,” stemming from general principles of international law and reflected in state practice, which requires an ongoing assessment throughout the conflict, balancing the overall harm against the strategic objectives. The article traces the historical development and aims of the principle of proportionality in war, sets out the scope and aims of strategic proportionality, and analyzes how such a principle can be operationalized in practice.

    Terminating a common envelope jets supernova impostor event with a super-Eddington blue supergiant

    We conducted one-dimensional stellar evolution numerical simulations to build blue supergiant stellar models with a very low envelope mass and a super-Eddington luminosity of 10^7 L_⊙ that mimic the last phase of common envelope evolution (CEE), in which a neutron star (NS) accretes mass from the envelope and launches jets that power the system. Common envelope jets supernovae (CEJSNe) are CEE transient events in which a NS spirals in inside the envelope, and then inside the core, of a red supergiant (RSG) star, accretes mass, and launches jets that power the transient event. If the NS (or black hole) does not enter the core of the RSG, the event is a CEJSN-impostor. We propose that in some cases a CEJSN-impostor event might end with such a blue supergiant phase lasting for several years to a few tens of years. The radius of the blue supergiant is about tens to a few hundred solar radii. We use a simple prescription to deposit the jets' energy into the envelope. We find that the expected accretion rate of envelope mass onto the NS at the end of the CEE allows the jet power we assume, 10^7 L_⊙. Such a low-mass envelope might be the leftover of the RSG envelope, or an envelope rebuilt by mass fallback. Our study of a blue supergiant at the termination of a CEJSN-impostor event adds to the rich variety of transients that CEJSNe and CEJSN-impostors might form.
    Comment: Accepted for publication in MNRAS.
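    The quoted jet power can be tied to the mass accretion rate through the standard accretion-power scaling. A minimal worked scaling, assuming illustrative parameter values (efficiency η = 0.1, M_NS = 1.4 M_⊙, R_NS = 12 km) that are not taken from the abstract:

    ```latex
    % Jet power as a fraction eta of the accretion power onto the NS
    % (illustrative scaling; eta, M_NS, and R_NS are assumed values).
    \[
    L_{\rm jet} \simeq \eta\,\frac{G M_{\rm NS}\dot{M}}{R_{\rm NS}}
    \approx 10^{7}\,L_{\odot}
    \left(\frac{\eta}{0.1}\right)
    \left(\frac{M_{\rm NS}}{1.4\,M_{\odot}}\right)
    \left(\frac{R_{\rm NS}}{12\,{\rm km}}\right)^{-1}
    \left(\frac{\dot{M}}{4\times10^{-5}\,M_{\odot}\,{\rm yr^{-1}}}\right).
    \]
    ```

    On these assumed values, an accretion rate of order a few times 10^-5 M_⊙ per year yields the assumed 10^7 L_⊙.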

    The depletion of the red supergiant envelope radiative zone during common envelope evolution

    We conduct one-dimensional stellar evolution simulations of red supergiant (RSG) stars that mimic common envelope evolution (CEE) and find that the inner boundary of the envelope convective zone moves into the initial envelope radiative zone. The envelope convection practically disappears only when the RSG radius decreases by about an order of magnitude or more. The implication is that one cannot split the CEE into one stage during which the companion spirals in inside the envelope convective zone and removes it, and a second, slower phase during which the companion orbits within the initial envelope radiative zone and stable mass transfer takes place. At best, this might take place when the orbital separation is about several solar radii. However, by that time other processes become important. We conclude that, as of yet, the commonly used alpha-formalism, which is based on energy considerations, is the best phenomenological formalism.
    Comment: Research in Astronomy and Astrophysics, in press.
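    For reference, the energy-based alpha-formalism mentioned in the conclusion is conventionally written as below. This is the standard form, not an equation given in the abstract; α_CE is the fraction of the released orbital energy that goes into unbinding the envelope and λ parametrizes the envelope's structure:

    ```latex
    % Standard energy (alpha) formalism for common envelope evolution:
    % the orbital energy released by the in-spiral, scaled by alpha_CE,
    % is equated to the binding energy of the envelope.
    \[
    \alpha_{\rm CE}\left(
    \frac{G M_{\rm core} M_{\rm comp}}{2 a_{\rm f}}
    - \frac{G M_{1} M_{\rm comp}}{2 a_{\rm i}}
    \right)
    = \frac{G M_{1} M_{\rm env}}{\lambda R_{1}},
    \]
    % where M_1, M_env, and R_1 are the giant's total mass, envelope mass,
    % and radius, M_comp is the companion mass, and a_i, a_f are the
    % initial and final orbital separations.
    ```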

    On the Ability of Graph Neural Networks to Model Interactions Between Vertices

    Graph neural networks (GNNs) are widely used for modeling complex interactions between entities represented as vertices of a graph. Despite recent efforts to theoretically analyze the expressive power of GNNs, a formal characterization of their ability to model interactions is lacking. The current paper aims to address this gap. Formalizing strength of interactions through an established measure known as separation rank, we quantify the ability of certain GNNs to model interaction between a given subset of vertices and its complement, i.e., between the two sides of a given partition of input vertices. Our results reveal that the ability to model interaction is primarily determined by the partition's walk index, a graph-theoretical characteristic that we define by the number of walks originating from the boundary of the partition. Experiments with common GNN architectures corroborate this finding. As a practical application of our theory, we design an edge sparsification algorithm named Walk Index Sparsification (WIS), which preserves the ability of a GNN to model interactions when input edges are removed. WIS is simple, computationally efficient, and markedly outperforms alternative methods in terms of induced prediction accuracy. More broadly, it showcases the potential of improving GNNs by theoretically analyzing the interactions they can model.
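    To convey the flavor of such an edge sparsification scheme, here is a minimal greedy sketch: it repeatedly removes the edge whose deletion least reduces a simple total-walk-count proxy. This is an assumed simplification for illustration, not the paper's exact WIS procedure; the walk length, the scoring by total walk counts, and all function names are choices made for this example:

    ```python
    import numpy as np

    def walk_counts(adj, length=3):
        """Number of walks of the given length starting at each vertex
        (entries of adj^length applied to the all-ones vector)."""
        counts = np.ones(adj.shape[0])
        for _ in range(length):
            counts = adj @ counts
        return counts

    def greedy_walk_sparsify(adj, num_remove, length=3):
        """Greedily remove num_remove edges from an unweighted, undirected
        0/1 adjacency matrix, each time picking the edge whose removal
        least reduces the total walk count."""
        adj = adj.astype(float).copy()
        edges = [(i, j) for i in range(adj.shape[0])
                 for j in range(i + 1, adj.shape[0]) if adj[i, j] > 0]
        for _ in range(num_remove):
            base = walk_counts(adj, length).sum()
            best_edge, best_drop = None, np.inf
            for i, j in edges:
                adj[i, j] = adj[j, i] = 0.0   # tentatively remove the edge
                drop = base - walk_counts(adj, length).sum()
                adj[i, j] = adj[j, i] = 1.0   # restore it
                if drop < best_drop:
                    best_edge, best_drop = (i, j), drop
            i, j = best_edge
            adj[i, j] = adj[j, i] = 0.0       # commit the best removal
            edges.remove(best_edge)
        return adj

    # Toy usage: sparsify a small random undirected graph.
    rng = np.random.default_rng(0)
    A = (rng.random((8, 8)) < 0.4).astype(float)
    A = np.triu(A, 1); A = A + A.T            # symmetric, no self-loops
    print(greedy_walk_sparsify(A, num_remove=3).sum() / 2, "edges remain")
    ```

    In the paper, the scoring is tied to walk indices of the relevant partitions rather than a single global walk count; the sketch above only mirrors the greedy remove-the-least-harmful-edge structure.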

    Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Neural Networks

    In the pursuit of explaining implicit regularization in deep learning, prominent focus has been given to matrix and tensor factorizations, which correspond to simplified neural networks. It was shown that these models exhibit an implicit tendency towards low matrix and tensor ranks, respectively. Drawing closer to practical deep learning, the current paper theoretically analyzes the implicit regularization in hierarchical tensor factorization, a model equivalent to certain deep convolutional neural networks. Through a dynamical systems lens, we overcome challenges associated with hierarchy and establish implicit regularization towards low hierarchical tensor rank. This translates to an implicit regularization towards locality for the associated convolutional networks. Inspired by our theory, we design explicit regularization discouraging locality and demonstrate its ability to improve the performance of modern convolutional networks on non-local tasks, defying the conventional wisdom that architectural changes are needed. Our work highlights the potential of enhancing neural networks via theoretical analysis of their implicit regularization.
    Comment: Accepted to ICML 2022.
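    To make the cited implicit tendency towards low matrix rank concrete, here is a self-contained toy experiment under assumed settings (a depth-2 matrix factorization trained by plain gradient descent on a matrix-completion loss with small initialization; none of this is code from the paper). Nothing in the loss penalizes rank, yet the learned product comes out effectively low-rank:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy matrix completion: observe ~30% of the entries of a rank-1 matrix.
    n = 20
    truth = np.outer(rng.normal(size=n), rng.normal(size=n))
    mask = rng.random((n, n)) < 0.3

    # Depth-2 factorization W = W2 @ W1 with small (near-zero) initialization.
    W1 = rng.normal(scale=1e-2, size=(n, n))
    W2 = rng.normal(scale=1e-2, size=(n, n))

    lr = 0.5
    for _ in range(5000):
        # Gradient of the mean squared error over observed entries only.
        residual = mask * (W2 @ W1 - truth) / mask.sum()
        gW1 = W2.T @ residual
        gW2 = residual @ W1.T
        W1 -= lr * gW1
        W2 -= lr * gW2

    # The loss never penalizes rank, yet the singular values of the learned
    # product decay sharply: an implicit bias towards low matrix rank.
    print(np.round(np.linalg.svd(W2 @ W1, compute_uv=False)[:5], 3))
    ```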